Multi-Modal Evolutionary Deep Learning Model for Ovarian Cancer Diagnosis

Authors

Abstract

Ovarian cancer (OC) is a common cause of mortality among women. Deep learning has recently shown strong performance in predicting OC stages and subtypes. However, most state-of-the-art deep models employ single-modality data, which may yield low-level performance due to insufficient representation of important OC characteristics. Furthermore, these models still lack optimization of their construction, which makes them computationally costly to train and deploy. In this work, a hybrid evolutionary deep learning model using multi-modal data is proposed. The established fusion framework amalgamates the gene modality alongside the histopathological image modality. Based on the different states and forms of each modality, a dedicated feature extraction network is set up for each. This includes a predictive antlion-optimized long short-term memory (LSTM) network to process longitudinal gene data, and a convolutional neural network (CNN) to process histopathology images, whose topology is customized automatically by the antlion algorithm so that it realizes its best performance. The outputs of the two improved networks are then fused through weighted linear aggregation, and the fused features are finally used to predict the OC stage. A number of assessment indicators were used to compare the proposed model against nine other models constructed with distinct optimization algorithms, and the comparison was also conducted on benchmark datasets of breast and lung cancers. The results reveal that the proposed model is more precise and accurate in diagnosis.
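The abstract describes a two-branch pipeline: an antlion-optimized LSTM for longitudinal gene data, an antlion-optimized CNN for histopathology images, and weighted linear aggregation of the two feature vectors before stage prediction. The following is a minimal PyTorch-style sketch of that fusion step only; the branch widths, the `fusion_weight` parameter, the number of stages, and the class name are illustrative assumptions rather than the authors' implementation, and the antlion-driven topology search is not shown.

```python
# Sketch of the multi-modal fusion described in the abstract, assuming a
# PyTorch implementation. Layer sizes, the fusion weight, and the number
# of OC stages are illustrative choices, not values from the paper.
import torch
import torch.nn as nn


class MultiModalFusionNet(nn.Module):
    def __init__(self, gene_features=128, hidden=64, num_stages=4, fusion_weight=0.5):
        super().__init__()
        # LSTM branch for longitudinal gene data: (batch, time, gene_features)
        self.gene_lstm = nn.LSTM(gene_features, hidden, batch_first=True)
        # CNN branch for histopathology image patches: (batch, 3, H, W)
        self.image_cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, hidden),
        )
        # Scalar weight for weighted linear aggregation of the two branches.
        self.fusion_weight = fusion_weight
        self.classifier = nn.Linear(hidden, num_stages)

    def forward(self, gene_seq, image):
        _, (h_n, _) = self.gene_lstm(gene_seq)   # final hidden state of the LSTM
        gene_feat = h_n[-1]                      # (batch, hidden)
        image_feat = self.image_cnn(image)       # (batch, hidden)
        # Weighted linear aggregation of the two modality embeddings.
        fused = self.fusion_weight * gene_feat + (1.0 - self.fusion_weight) * image_feat
        return self.classifier(fused)


# Toy forward pass with random tensors standing in for real multi-modal data.
model = MultiModalFusionNet()
logits = model(torch.randn(2, 10, 128), torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```

In the paper, hyperparameters such as the hidden sizes and layer counts fixed here would instead be decision variables searched by the antlion optimizer for each branch.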

Similar Articles

Multi-Modal Multi-Task Deep Learning for Autonomous Driving

Several deep learning approaches have been applied to the autonomous driving task, many employing end-to-end deep neural networks. Autonomous driving is complex, utilizing multiple behavioral modalities ranging from lane changing to turning and stopping. However, most existing approaches do not factor the different behavioral modalities of the driving task into the training strategy. This pap...

Deep Multi-Modal Image Correspondence Learning

Inference of correspondences between images from different modalities is an extremely important perceptual ability that enables humans to understand and recognize crossmodal concepts. In this paper, we consider an instance of this problem that involves matching photographs of building interiors with their corresponding floorplan. This is a particularly challenging problem because a floorplan, a...

Multi-modal Deep Learning Approach for Flood Detection

In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, so we use this information to detect the floods. The model is based on a Convolutional Neural Network which extracts the visual features and a bidirectional Long Short-Term Memory network to extract the ...

High-order Deep Neural Networks for Learning Multi-Modal Representations

In multi-modal learning, data consists of multiple modalities, which need to be represented jointly to capture the real-world ’concept’ that the data corresponds to (Srivastava & Salakhutdinov, 2012). However, it is not easy to obtain the joint representations reflecting the structure of multi-modal data with machine learning algorithms, especially with conventional neural networks. This is bec...

Twitter Demographic Classification Using Deep Multi-modal Multi-task Learning

Twitter should be an ideal place to get a fresh read on how different issues are playing with the public, one that’s potentially more reflective of democracy in this new media age than traditional polls. Pollsters typically ask people a fixed set of questions, while in social media people use their own voices to speak about whatever is on their minds. However, the demographic distribution of us...

Journal

Journal title: Symmetry

Year: 2021

ISSN: 0865-4824, 2226-1877

DOI: https://doi.org/10.3390/sym13040643